Perturbation Techniques for Convergence Analysis of Proximal Gradient Method and Other First-Order Algorithms via Variational Analysis

Abstract

We develop new perturbation techniques for conducting convergence analysis of various first-order algorithms for a class of nonsmooth optimization problems. We consider the iteration scheme of an algorithm to construct a perturbed stationary point set-valued map, and define the perturbing parameter by the difference of two consecutive iterates. Then, we show that the calmness condition of the induced set-valued map, together with a local version of the proper separation of stationary value condition, is sufficient to ensure the linear convergence of the algorithm. The equivalence of this calmness condition to the one for the canonically perturbed stationary point set-valued map is proved, and this allows us to derive some sufficient conditions for calmness by using recent developments in variational analysis. These sufficient conditions are different from existing results (especially the error-bound-based ones) in that they can be easily verified for many concrete application models. Our analysis is focused on the fundamental proximal gradient (PG) method, and it enables us to show that any accumulation point of the sequence generated by the PG method must be a stationary point in terms of the proximal subdifferential, instead of the limiting subdifferential. This result reveals the surprising fact that the solution quality found by the PG method is in general superior. It also leads to some improvement of the linear convergence results for the PG method in the convex case. The proposed perturbation technique can be conveniently used to derive the linear rate of convergence of a number of other first-order methods, including the well-known alternating direction method of multipliers and the primal-dual hybrid gradient method, under mild assumptions.
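A minimal sketch of the PG iteration discussed above, with the "perturbing parameter" tracked as the difference of two consecutive iterates. The problem instance (a LASSO-type model min_x 0.5·‖Ax − b‖² + λ‖x‖₁), the data, and the step size are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(A, b, lam, eta, iters=1000):
    x = np.zeros(A.shape[1])
    perturbations = []  # p_k = x_{k+1} - x_k, the perturbing parameter
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                 # gradient of the smooth part
        x_new = soft_threshold(x - eta * grad, eta * lam)
        perturbations.append(np.linalg.norm(x_new - x))
        x = x_new
    return x, perturbations

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
eta = 1.0 / np.linalg.norm(A, 2) ** 2            # step size 1/L, L = ||A||_2^2
x, perts = proximal_gradient(A, b, lam=0.1, eta=eta)
print(perts[-1])  # the perturbing parameter shrinks as the iterates converge
```

On this strongly convex instance the perturbation norms decay linearly, which is the kind of behavior the calmness-based analysis certifies.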


Similar Articles

A Variational Analysis of Stochastic Gradient Algorithms

Stochastic Gradient Descent (SGD) is an important algorithm in machine learning. With constant learning rates, it is a stochastic process that, after an initial phase of convergence, generates samples from a stationary distribution. We show that SGD with constant rates can be effectively used as an approximate posterior inference algorithm for probabilistic modeling. Specifically, we show how t...
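The behavior described above can be illustrated with a toy experiment: constant-rate SGD on a 1-D quadratic with noisy gradients settles, after burn-in, into a stationary distribution around the minimizer. All constants here (objective, learning rate, noise model) are assumptions for illustration:

```python
import numpy as np

# Constant-learning-rate SGD on f(x) = 0.5 * (x - mu)^2 with Gaussian
# gradient noise; after burn-in, iterates sample a stationary distribution.
rng = np.random.default_rng(1)
mu, lr, steps, burn_in = 3.0, 0.1, 5000, 1000

x = 0.0
samples = []
for k in range(steps):
    noisy_grad = (x - mu) + rng.standard_normal()  # stochastic gradient
    x -= lr * noisy_grad
    if k >= burn_in:
        samples.append(x)

samples = np.array(samples)
print(samples.mean(), samples.std())  # mean near mu, spread set by lr
```

Shrinking the learning rate tightens the stationary distribution around the minimizer, which is what makes constant-rate SGD usable as an approximate posterior sampler.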


Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization

In many modern machine learning applications, structures of underlying mathematical models often yield nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantee. In this work, we investigate the accelerated proximal gradient method for nonconvex program...


Stability analysis of fractional-order nonlinear Systems via Lyapunov method

In this paper, we study the stability of fractional-order nonlinear dynamic systems by means of the Lyapunov method. To examine the obtained results, we employ the developed techniques on test examples.


Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization

A. Proof of Theorem 1. We first recall the following lemma.

Lemma 1 (Lemma 1, Gong et al., 2013). Under Assumption 1.{3}, for any η > 0 and any x, y ∈ ℝ such that x = prox_{ηg}(y − η∇f(y)), one has

F(x) ≤ F(y) − (1/(2η) − L/2)‖x − y‖².

Applying Lemma 1 with x = x_k and y = y_k, we obtain

F(x_k) ≤ F(y_k) − (1/(2η) − L/2)‖x_k − y_k‖².   (12)

Since η < 1/L, it follows that F(x_k) ≤ F(y_k). Moreover, ...
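The sufficient-decrease inequality of Lemma 1 can be checked numerically on a toy instance. All problem data below are illustrative assumptions: f(x) = 0.5·‖x‖² (so L = 1 and ∇f(y) = y), g = ‖·‖₁, F = f + g, and an arbitrary test point y:

```python
import numpy as np

def prox_l1(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def F(x):
    # F = f + g with f(x) = 0.5 * ||x||^2 and g(x) = ||x||_1.
    return 0.5 * np.dot(x, x) + np.abs(x).sum()

L, eta = 1.0, 0.5                  # step size eta < 1/L, as the lemma requires
y = np.array([2.0, -1.5, 0.3])
x = prox_l1(y - eta * y, eta)      # x = prox_{eta g}(y - eta * grad f(y))

lhs = F(x)
rhs = F(y) - (1 / (2 * eta) - L / 2) * np.linalg.norm(x - y) ** 2
print(lhs <= rhs)                  # the lemma's inequality holds
```

Since η < 1/L makes the coefficient 1/(2η) − L/2 strictly positive, the inequality also implies the monotone decrease F(x) ≤ F(y) used in the proof.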


Convergence Analysis of Gradient Descent Stochastic Algorithms

This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the ca...



Journal

Journal title: Set-Valued and Variational Analysis

Year: 2021

ISSN: 1877-0541, 1877-0533

DOI: https://doi.org/10.1007/s11228-020-00570-0